16 research outputs found

    Understanding the fundamentals of bipedal locomotion in humans and robots

Walking is a robust and efficient way of moving through the world and would greatly enhance the capabilities of humanoid robots, yet current robots cannot match the performance of their biological counterparts. The highly nonlinear dynamics of locomotion create a vast state-action space that makes model-based control difficult, yet humans are highly proficient and robust in their motion while operating under similar constraints. This disparity in performance naturally leads to the question: what can we learn about locomotion control by observing humans, and how can this be used to develop bio-inspired locomotion control for mechatronic humanoids? This thesis investigates bio-inspired locomotion control, but also explores the limitations of this approach and how robotic platforms can move us towards a better understanding of locomotion.

We first present a methodology for measuring and analysing human locomotion behaviour, specifically disturbance recovery, and fit models that represent this complex behaviour as simply as possible, so that it can be translated directly into a simple controller for reactive motion. A single minimum-jerk Model Predictive Control algorithm acting on the Centre of Mass (CoM) best captured human motion across multiple recovery strategies, rather than requiring one controller per strategy, as is common in this area. Capturing complex human behaviour in this simple CoM model shows that bio-inspiration can be an important tool for controller development. However, behaviour varies between, and even within, individuals given similar initial conditions, which manifests as stochastic behaviour. Coupled with the fact that only expressed behaviours can be measured, rather than the underlying control policies, this stochasticity presents a fundamental limit to bio-inspired control: only indirect inferences can be made about a complex, stochastic system.

To overcome these barriers, we investigate the use of mechatronic humanoid robots to explore invariant aspects of the vast dynamic state-space of locomotion. These invariants are described by physical laws that apply to both biological and mechatronic humanoid forms, and are therefore not subject to the stochastic behaviour of individual humans. We present a pipeline to explore the invariant energetics of humanoid robots during stepping for push recovery, identifying the most efficient stepping parameters for a given initial CoM velocity and desired step length. Exploring the stepping state-space in this way, our analysis finds a region of attraction between disturbance magnitude and optimal step length, surrounded by a region of similarly efficient alternatives that corresponds to the stochastic behaviour observed in humans during push recovery. Identifying this structure requires reproducibility, direct access to internal measurements, and known full-body dynamics, none of which are available in humans. We expand this paradigm to investigate the invariant energetics of continuous walking with a full-body humanoid, exploring the state-space of step length and step timing to identify the most efficient sub-spaces of these parameters, which describe the most efficient way to walk. Through analysis of this state-space, we provide evidence that the humanoid morphology exhibits a passive tendency towards energy-optimal motion and that its dynamics follow a region of attraction towards Cost of Transport-optimal motion.
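Minimum-jerk trajectories of the kind used in the CoM controller above have a well-known closed-form solution (Flash and Hogan, 1985). The following is a minimal Python sketch, not the thesis implementation, assuming a one-dimensional CoM displacement with zero velocity and acceleration at both endpoints; names and the example numbers are illustrative.

    import numpy as np

    def minimum_jerk(x0, xf, duration, n_samples=100):
        """Closed-form minimum-jerk trajectory from x0 to xf.

        Minimises integrated squared jerk subject to zero velocity and
        acceleration at both endpoints.
        """
        t = np.linspace(0.0, duration, n_samples)
        tau = t / duration
        s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5  # smooth 0 -> 1 blend
        return t, x0 + (xf - x0) * s

    # Example: shift the CoM 0.12 m over a new support point in 0.4 s.
    t, com_x = minimum_jerk(0.0, 0.12, 0.4)

In a receding-horizon (MPC) setting, a trajectory of this form would be regenerated at each control step from the current CoM state towards the desired recovery target.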
Overall, these findings demonstrate the utility of robotics as a tool for exploring certain aspects of legged locomotion. The results of our methodology suggest that humans do not need to explore a vast state-action space to learn to walk; they need only internalise simple heuristics for the natural dynamics of stepping, which are easy to learn and can produce rapid, reactive, and efficient stepping without costly decision-making processes.
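Cost of Transport (CoT), the efficiency measure referenced above, has a standard dimensionless definition: energy consumed per unit weight per unit distance. A minimal sketch; the numbers in the example are illustrative, not results from the thesis.

    def cost_of_transport(energy_j, mass_kg, distance_m, g=9.81):
        """Dimensionless Cost of Transport: CoT = E / (m * g * d).

        Lower values mean less energy spent per unit weight per
        unit distance travelled.
        """
        return energy_j / (mass_kg * g * distance_m)

    # Illustrative: a 50 kg humanoid using 600 J to walk 10 m
    # has CoT = 600 / (50 * 9.81 * 10) ~= 0.12.
    print(cost_of_transport(600.0, 50.0, 10.0))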

    Metrics for 3D Object Pointing and Manipulation in Virtual Reality


    Study of Multimodal Interfaces and the Improvements on Teleoperation


    Optimisation of Body-ground Contact for Augmenting Whole-Body Loco-manipulation of Quadruped Robots

Legged robots have great potential to perform loco-manipulation tasks, yet it is challenging to keep the robot balanced while it interacts with the environment. In this paper we study the use of additional contact points for maximising the robustness of loco-manipulation motions. Specifically, body-ground contact is studied as a means of enhancing the robustness and manipulation capabilities of quadrupedal robots. We propose to equip the robot with prongs: small legs rigidly attached to the body which ensure that body-ground contact occurs at controllable point-contacts. The effect of these prongs on robustness is quantified by computing the Smallest Unrejectable Force (SUF), a measure of robustness related to Feasible Wrench Polytopes. We apply the SUF to assess the robustness of the system and propose an effective approximation of the SUF that can be computed at near-real-time speed. We design a whole-body controller based on hierarchical quadratic programming that maintains stable interaction when the prongs are in contact with the ground. The prong concept and the resulting control framework are implemented on hardware to validate the increased robustness and the newly enabled loco-manipulation tasks, such as obstacle clearance and manipulation of a large object.
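The paper's exact SUF computation is not reproduced here. As a rough illustration of the underlying geometry: if the Feasible Wrench Polytope is available in half-space form {w : A w <= b}, the radius of the largest ball of rejectable wrenches centred at the current operating wrench gives a conservative robustness margin. A sketch under that assumption, with illustrative names; it is a proxy for, not the paper's definition of, the SUF.

    import numpy as np

    def robustness_margin(A, b, w0):
        """Conservative robustness margin of an operating wrench w0
        inside a feasible wrench polytope {w : A w <= b}.

        Returns the radius of the largest ball centred at w0 that fits
        inside the polytope, i.e. the magnitude of the smallest
        disturbance wrench guaranteed to be rejectable in every
        direction. Assumes w0 lies inside the polytope.
        """
        row_norms = np.linalg.norm(A, axis=1)  # normal length of each face
        slack = b - A @ w0                     # per-face slack at w0
        return np.min(slack / row_norms)       # distance to nearest face

Because this is a closed-form minimum over the polytope faces, it can be evaluated at control rates once the polytope's half-space description is known.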

    Decoding Motor Skills of AI and Human Policies: A Study on Humanoid and Human Balance Control


    Next Best View Planning for Object Recognition in Mobile Robotics

Recognising objects in everyday human environments is a challenging task for autonomous mobile robots. However, actively planning the views from which an object might be perceived can significantly improve overall task performance. In this paper we design, develop, and evaluate an approach for next-best-view planning. Our approach is based on online aspect graphs and selects the next best view after an initial object candidate has been identified. It proceeds in two steps. First, we analyse the visibility of the object candidate from a set of candidate views that are reachable by the robot. Second, we analyse the visibility of object features by projecting the model of the most likely object into the scene. Experimental results on a mobile robot platform show that our approach is (I) effective at finding a next view that leads to recognition of an object in 82.5% of cases, (II) able to account for visual occlusions in 85% of trials, and (III) able to disambiguate between objects that share a similar set of features. Overall, we believe the proposed approach provides a general methodology applicable to a range of tasks beyond object recognition, such as inspection, reconstruction, and task outcome classification.
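A minimal sketch of the view-selection step, assuming per-view estimates of newly visible object features and travel cost are already available from the visibility analysis. The names and the linear scoring rule are illustrative; the paper's aspect-graph machinery is not reproduced here.

    def next_best_view(candidate_views, visible_features, travel_cost, alpha=0.1):
        """Pick the candidate view maximising expected newly visible
        features minus a travel-cost penalty.

        candidate_views  : iterable of view identifiers
        visible_features : dict view -> estimated count of unseen object
                           features visible from that view (e.g. from
                           projecting the most likely object model)
        travel_cost      : dict view -> cost for the robot to reach the view
        alpha            : trade-off weight between gain and cost
        """
        return max(candidate_views,
                   key=lambda v: visible_features[v] - alpha * travel_cost[v])

    # Illustrative call with three hypothetical views:
    best = next_best_view(
        candidate_views=["v1", "v2", "v3"],
        visible_features={"v1": 8, "v2": 12, "v3": 5},
        travel_cost={"v1": 2.0, "v2": 6.0, "v3": 1.0},
    )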